Monotonically Overrelaxed EM Algorithms
Author
Abstract
We explore the idea of overrelaxation for accelerating the expectation-maximization (EM) algorithm, focusing on preserving its simplicity and monotonic convergence properties. It is shown that in many cases a trivial modification in the M-step results in an algorithm that maintains monotonic increase in the log-likelihood, but can have an appreciably faster convergence rate, especially when EM is very slow. The method is applicable to more general fixed point algorithms. Its simplicity and effectiveness are illustrated with several statistical problems, including probit regression, least absolute deviations regression, Poisson inverse problems, and finite mixtures.
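The overrelaxation idea described in the abstract can be sketched in a few lines: take the ordinary EM update, step past it by a factor ω > 1, and fall back to the plain EM step whenever the overshot point would not improve the log-likelihood, so monotonicity is preserved. The sketch below is a hypothetical toy illustration (not the paper's implementation), using EM to estimate the mixing weight of a two-component Gaussian mixture with known component densities.

```python
import math

def phi(x, mu):
    # Standard normal density shifted to mean mu (unit variance assumed).
    return math.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2 * math.pi)

def loglik(p, data):
    # Observed-data log-likelihood for mixture p*N(0,1) + (1-p)*N(4,1).
    return sum(math.log(p * phi(x, 0.0) + (1 - p) * phi(x, 4.0)) for x in data)

def em_step(p, data):
    # E-step: posterior responsibility of component 1 for each point;
    # M-step: the new mixing weight is the average responsibility.
    r = [p * phi(x, 0.0) / (p * phi(x, 0.0) + (1 - p) * phi(x, 4.0))
         for x in data]
    return sum(r) / len(r)

def overrelaxed_em(p, data, omega=1.5, iters=50):
    for _ in range(iters):
        p_em = em_step(p, data)
        # Overrelaxation: move past the EM update by factor omega,
        # clamped to the open interval (0, 1).
        p_over = min(max(p + omega * (p_em - p), 1e-6), 1 - 1e-6)
        # Accept the overshot point only if it does at least as well as
        # the plain EM step; this keeps the iteration monotone.
        p = p_over if loglik(p_over, data) >= loglik(p_em, data) else p_em
    return p
```

Because the fallback is the ordinary EM step, each iteration increases the log-likelihood by at least as much as EM itself, so the modification costs only one extra likelihood evaluation per iteration.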
Related Resources
Convergence in Norm for Alternating Expectation-Maximization (EM) Type Algorithms. Statistica Sinica 5 (1995), 41-54
We provide a sufficient condition for convergence of a general class of alternating expectation-maximization (EM) type continuous-parameter estimation algorithms with respect to a given norm. This class includes EM, penalized EM, Green's OSL-EM, and other approximate EM algorithms. The convergence analysis can be extended to include alternating coordinate-maximization EM algorithms such as Meng an...
Asymptotic Convergence Properties of EM Type Algorithms
We analyze the asymptotic convergence properties of a general class of EM type algorithms for estimating an unknown parameter via alternating estimation and maximization. As examples, this class includes ML-EM, penalized ML-EM, Green's OSL-EM, and many other approximate EM algorithms. A theorem is given which provides conditions for monotone convergence with respect to a given norm and specifies an ...
Adaptive Overrelaxed Bound Optimization Methods
We study a class of overrelaxed bound optimization algorithms and their relationship to standard bound optimizers, such as Expectation-Maximization, Iterative Scaling, CCCP, and Non-Negative Matrix Factorization. We provide a theoretical analysis of the convergence properties of these optimizers and identify analytic conditions under which they are expected to outperform the standard versions. B...
Flexible and efficient implementations of Bayesian independent component analysis
In this paper we present an empirical Bayes method for flexible and efficient Independent Component Analysis (ICA). The method is flexible with respect to the choice of source prior, the dimensionality and positivity of the mixing matrix, and the structure of the noise covariance matrix. The efficiency is ensured using parameter optimizers which are more advanced than the expectation-maximization (EM) algo...
Space-alternating generalized expectation-maximization algorithm
The expectation-maximization (EM) method can facilitate maximizing likelihood functions that arise in statistical estimation problems. In the classical EM paradigm, one iteratively maximizes the conditional log-likelihood of a single unobservable complete data space, rather than maximizing the intractable likelihood function for the measured or incomplete data. EM algorithms update all paramete...